feat: support configurable execution mode (CLI or API) #34
Conversation
sehwan505
commented
Dec 3, 2025
- Restructure the LLM client
- Change LLM CLI usage so it can be provided through multiple providers
Pull request overview
This PR refactors the LLM client architecture to support multiple execution modes (CLI, API, MCP) instead of hard-coding OpenAI API usage. It introduces a flexible engine abstraction layer with automatic fallback and removes the hard dependency on OpenAI API keys.
Key changes:
- Introduces engine abstraction supporting CLI tools (Claude, Gemini), OpenAI API, and MCP sampling
- Adds a configuration system for LLM backend selection via `.sym/.env`
- Implements a request builder pattern for cleaner API usage
- Parallelizes rule routing in converter for better performance
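The engine abstraction with automatic fallback described above can be sketched as a small Go interface. This is a minimal illustration, not the PR's actual code: the names `Engine`, `Chain`, and `stubEngine` are assumptions for the sketch.

```go
package main

import (
	"errors"
	"fmt"
)

// Engine sketches the abstraction the PR introduces: each backend
// (CLI tool, OpenAI API, MCP sampling) implements the same interface.
type Engine interface {
	Name() string
	Available() bool
	Complete(prompt string) (string, error)
}

// Chain tries each engine in order, falling back to the next one
// when the current engine is unavailable or returns an error.
type Chain struct {
	engines []Engine
}

func (c *Chain) Complete(prompt string) (string, error) {
	for _, e := range c.engines {
		if !e.Available() {
			continue
		}
		if out, err := e.Complete(prompt); err == nil {
			return out, nil
		}
	}
	return "", errors.New("no LLM engine available")
}

// stubEngine simulates a backend for demonstration.
type stubEngine struct {
	name      string
	available bool
}

func (s stubEngine) Name() string    { return s.name }
func (s stubEngine) Available() bool { return s.available }
func (s stubEngine) Complete(p string) (string, error) {
	return s.name + ": " + p, nil
}

func main() {
	chain := &Chain{engines: []Engine{
		stubEngine{name: "claude-cli", available: false}, // e.g. binary not on PATH
		stubEngine{name: "openai-api", available: true},
	}}
	out, _ := chain.Complete("hello")
	fmt.Println(out) // the first available engine answers
}
```

The point of the design is that callers depend only on `Complete`; which backend actually runs is decided by configuration and availability at runtime.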
Reviewed changes
Copilot reviewed 37 out of 38 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| internal/llm/engine/engine.go | Defines core engine interface and abstractions |
| internal/llm/engine/api.go | OpenAI API engine implementation |
| internal/llm/engine/cli.go | CLI engine implementation with provider support |
| internal/llm/engine/mcp.go | MCP sampling engine for host LLM delegation |
| internal/llm/engine/cliprovider/*.go | Provider-specific CLI configurations (Claude, Gemini) |
| internal/llm/client.go | Refactored client with multi-engine support and fallback chain |
| internal/llm/config.go | Configuration management for LLM backends |
| internal/cmd/llm.go | New CLI commands for LLM setup and testing |
| internal/cmd/init.go | Updated to use new LLM configuration system |
| internal/cmd/convert.go | Migrated to new client API |
| internal/cmd/validate.go | Migrated to new client API with availability checking |
| internal/converter/converter.go | Added parallel rule processing, migrated to new API |
| internal/validator/llm_validator.go | Improved prompts and JSON parsing |
| internal/adapter/*/*.go | Migrated to new client request builder API |
| internal/mcp/server.go | Added MCP session support for sampling |
| internal/server/server.go | Removed API key requirement |
| tests/e2e/*.go | Updated test client initialization |
```go
// parseJSON parses JSON string into the target struct
func parseJSON(jsonStr string, target interface{}) error {
	decoder := strings.NewReader(jsonStr)
	return decodeJSON(decoder, target)
}

// decodeJSON decodes JSON from a reader (avoiding import cycle with encoding/json)
func decodeJSON(reader *strings.Reader, target interface{}) error {
	// Manual parsing for the specific structure we need
	content, _ := readAll(reader)

	// Parse boolean field "violates"
	if resp, ok := target.(*jsonValidationResponse); ok {
		resp.Violates = strings.Contains(strings.ToLower(content), `"violates":true`) ||
			strings.Contains(strings.ToLower(content), `"violates": true`)

		resp.Confidence = extractJSONField(content, "confidence")
		resp.Description = extractJSONField(content, "description")
		resp.Suggestion = extractJSONField(content, "suggestion")
	}

	return nil
}

func readAll(reader *strings.Reader) (string, error) {
	var builder strings.Builder
	buf := make([]byte, 1024)
	for {
		n, err := reader.Read(buf)
		if n > 0 {
			builder.Write(buf[:n])
		}
		if err != nil {
			break
		}
	}
	return builder.String(), nil
}
```
This manual JSON parsing implementation has several issues:
- It reinvents JSON parsing instead of using Go's standard `encoding/json` package
- The comment "avoiding import cycle with encoding/json" is misleading: there is no import cycle with the standard library
- The implementation is fragile and error-prone compared to `json.Unmarshal`
Replace this with:

```go
func parseJSON(jsonStr string, target interface{}) error {
	return json.Unmarshal([]byte(jsonStr), target)
}
```

Then remove the `decodeJSON` and `readAll` helper functions (lines 294-324).
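As a sanity check that `json.Unmarshal` covers everything the manual parser extracts, here is a self-contained sketch. The struct tags (`violates`, `confidence`, `description`, `suggestion`) are assumed from the field names in the snippet, not taken from the PR.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// jsonValidationResponse mirrors the target struct implied by the
// snippet; the JSON tags are assumptions based on the extracted fields.
type jsonValidationResponse struct {
	Violates    bool   `json:"violates"`
	Confidence  string `json:"confidence"`
	Description string `json:"description"`
	Suggestion  string `json:"suggestion"`
}

// parseJSON is the one-line replacement suggested above.
func parseJSON(jsonStr string, target interface{}) error {
	return json.Unmarshal([]byte(jsonStr), target)
}

func main() {
	raw := `{"violates": true, "confidence": "high",
		"description": "magic number", "suggestion": "use a named constant"}`
	var resp jsonValidationResponse
	if err := parseJSON(raw, &resp); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(resp.Violates, resp.Confidence) // boolean and string both decoded
}
```

Unlike the substring search, `json.Unmarshal` also rejects malformed input with an error instead of silently returning zero values.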
```go
linterRules := make(map[string][]schema.UserRule)
for result := range results {
	for _, linter := range result.linters {
		linterRules[linter] = append(linterRules[linter], result.rule)
	}
}
```
Potential race condition: the map `linterRules` is accessed concurrently without synchronization. Multiple goroutines may append to the same slice simultaneously when different rules map to the same linter, causing data races.
Fix by using a mutex:

```go
linterRules := make(map[string][]schema.UserRule)
var mu sync.Mutex
for result := range results {
	for _, linter := range result.linters {
		mu.Lock()
		linterRules[linter] = append(linterRules[linter], result.rule)
		mu.Unlock()
	}
}
```

Alternatively, collect all results first, then aggregate them in a single-threaded manner.
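The alternative is often the more idiomatic fix in Go: let worker goroutines only send onto the channel, and have a single consumer own the map, so no mutex is needed at all. A self-contained sketch under that assumption (the `routeResult` type and field names are illustrative, inferred from the snippet):

```go
package main

import (
	"fmt"
	"sync"
)

// routeResult mirrors the result type implied by the snippet:
// one user rule routed to one or more target linters.
type routeResult struct {
	rule    string
	linters []string
}

// aggregate is the single consumer: it alone reads and writes the map,
// so no mutex is required even though producers run concurrently.
func aggregate(results <-chan routeResult) map[string][]string {
	linterRules := make(map[string][]string)
	for result := range results {
		for _, linter := range result.linters {
			linterRules[linter] = append(linterRules[linter], result.rule)
		}
	}
	return linterRules
}

func main() {
	rules := []routeResult{
		{rule: "no-magic-numbers", linters: []string{"checkstyle", "pmd"}},
		{rule: "max-line-length", linters: []string{"checkstyle"}},
	}

	results := make(chan routeResult)
	var wg sync.WaitGroup
	// Workers run in parallel and only ever send on the channel.
	for _, r := range rules {
		wg.Add(1)
		go func(r routeResult) {
			defer wg.Done()
			results <- r
		}(r)
	}
	// Close the channel once all workers are done.
	go func() {
		wg.Wait()
		close(results)
	}()

	linterRules := aggregate(results)
	fmt.Println(len(linterRules["checkstyle"]), len(linterRules["pmd"])) // 2 1
}
```

This "share memory by communicating" shape also makes the mutex suggestion moot: if `for result := range results` is already the only goroutine touching the map, the channel itself provides the synchronization.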
```shell
# Convert for Java with specific model
sym convert -i user-policy.json --targets checkstyle,pmd --openai-model gpt-5-mini
```
The help text still references the removed `--openai-model gpt-5-mini` flag. The flag was removed (as seen in the diff at line 58), but the example at line 44 still mentions it. Update the example to remove this reference.
```diff
-# Convert for Java with specific model
-sym convert -i user-policy.json --targets checkstyle,pmd --openai-model gpt-5-mini
+# Convert for Java
+sym convert -i user-policy.json --targets checkstyle,pmd
```